
Dutta, Kaushik

Topic Weight    Topic Terms
0.308           architecture, scheme, soa, distributed, architectures, layer, discuss, central, difference, coupled, service-oriented, advantages, standard, loosely, table
0.306           set, approach, algorithm, optimal, used, develop, results, use, simulation, experiments, algorithms, demonstrate, proposed, optimization, present
0.183           database, language, query, databases, natural, data, queries, relational, processing, paper, using, request, views, access, use
0.123           response, responses, different, survey, questions, results, research, activities, respond, benefits, certain, leads, two-stage, interactions, study
0.107           channel, distribution, demand, channels, sales, products, long, travel, tail, new, multichannel, available, product, implications, strategy

[Co-authorship network: focal researcher, coauthors of the focal researcher (1st degree), and coauthors of coauthors (2nd degree); edge labels give the number of co-authorships.]

Coauthors: Datta, Anindya (2); VanderMeer, Debra (2); Liang, Qianhui (1)
Keywords: caching (1); Database clusters (1); design research (1); request distribution (1); service-oriented architecture (1); SOA (1); task allocation (1); XML (1)

Articles (2)

SOA Performance Enhancement Through XML Fragment Caching. (Information Systems Research, 2012)
Authors:
Abstract:
    Organizations are increasingly choosing to implement service-oriented architectures to integrate distributed, loosely coupled applications. These architectures are implemented as services, which typically use XML-based messaging to communicate between service consumers and service providers across enterprise networks. We propose a scheme for caching fragments of service response messages to improve performance and service quality in service-oriented architectures. In our fragment caching scheme, we decompose responses into smaller fragments so that reusable components can be identified and cached in the XML routers of an XML overlay network within an enterprise network. Such caching mitigates processing requirements on providers and moves content closer to users, thus reducing bandwidth requirements on the network as well as improving service times. We describe the system architecture and caching algorithm details for our caching scheme, develop an analysis of the expected benefits of our scheme, and present the results of both simulation and case study-based experiments to show the validity and performance improvements provided by our caching scheme. Our simulation experiments show up to a 60% reduction in bandwidth consumption and up to a 50% improvement in response time. Further, our case study experiments demonstrate that when there is no resource bottleneck, the cache-enabled case reduces average response times by 40%-50% and increases throughput by 150% compared to the no-cache and full message caching cases. In experiments contrasting fragment caching and full message caching, we found that full message caching provides benefits when the number of possible unique responses is low, while the benefits of fragment caching increase as the number of possible unique responses increases. These experimental results clearly demonstrate the benefits of our approach.
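The Python sketch below illustrates the general idea of caching reusable response fragments at an intermediary. It is a minimal illustration assuming a hypothetical FragmentCache keyed by fragment identifier and a stub fetch_fragment_from_provider call; it is not the caching algorithm or XML-router architecture described in the paper.

    # A minimal sketch of the fragment-caching idea, assuming a simple
    # intermediary cache keyed by fragment identifier. FragmentCache,
    # fetch_fragment_from_provider, and the fragment IDs are hypothetical,
    # not the paper's data structures or algorithm.
    import xml.etree.ElementTree as ET

    def fetch_fragment_from_provider(fragment_id: str) -> str:
        # Stand-in for a call to the service provider on a cache miss.
        return f"<fragment id='{fragment_id}'>provider content</fragment>"

    class FragmentCache:
        """Stores reusable XML fragments so responses can be assembled locally."""

        def __init__(self) -> None:
            self._store: dict[str, str] = {}

        def get(self, fragment_id: str) -> str:
            # Serve a hit from the cache; fetch and remember a miss.
            if fragment_id not in self._store:
                self._store[fragment_id] = fetch_fragment_from_provider(fragment_id)
            return self._store[fragment_id]

        def assemble_response(self, fragment_ids: list[str]) -> str:
            # Build the full service response from (possibly cached) fragments.
            root = ET.Element("response")
            for fid in fragment_ids:
                root.append(ET.fromstring(self.get(fid)))
            return ET.tostring(root, encoding="unicode")

    # The second request reuses the cached 'header' and 'catalog' fragments,
    # so only the user-specific fragment is fetched from the provider.
    cache = FragmentCache()
    cache.assemble_response(["header", "catalog", "user-123"])
    cache.assemble_response(["header", "catalog", "user-456"])

In this toy version, only the request-specific fragment is fetched on the second call while the shared fragments are served from the cache, which is the intuition behind the bandwidth and response-time gains the abstract reports.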
A Cost-Based Database Request Distribution Technique for Online E-Commerce Applications. (MIS Quarterly, 2012)
Authors:
Abstract:
    E-commerce is growing to represent an increasing share of overall sales revenue, and online sales are expected to continue growing for the foreseeable future. This growth translates into increased activity on the supporting infrastructure, leading to a corresponding need to scale the infrastructure. This is difficult in an era of shrinking budgets and increasing functional requirements. Increasingly, IT managers are turning to virtualized cloud providers, drawn by the pay-for-use business model. As cloud computing becomes more popular, it is important for data center managers to accomplish more with fewer dollars (i.e., to increase the utilization of existing resources). Advanced request distribution techniques can help ensure both high utilization and smart request distribution, where requests are sent to the service resources best able to handle them. While such request distribution techniques have been applied to the web and application layers of the traditional online application architecture, request distribution techniques for the data layer have focused primarily on online transaction processing scenarios. However, online applications often have a read-intensive workload, in which read operations constitute a significant percentage of the workload (up to 95 percent or higher). In this paper, we propose a cost-based database request distribution (C-DBRD) strategy, a policy to distribute requests across a cluster of commercial, off-the-shelf databases, and discuss its implementation. We first develop the intuition behind our approach and describe a high-level architecture for database request distribution. We then develop a theoretical model for database load computation, which we use to design a method for database request distribution and build a software implementation. Finally, following a design science methodology, we evaluate our artifacts through experimental evaluation. Our experiments, in the lab and in production-scale systems, show significant improvement of database layer resource utilization, demonstrating up to a 45 percent improvement over existing request distribution techniques.
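The Python sketch below illustrates one simple reading of cost-based request distribution: estimate each replica's load from the costs of its in-flight queries and send each read to the least-loaded replica. The Replica class, QUERY_COST values, and routing rule are illustrative assumptions, not the C-DBRD load model or implementation evaluated in the paper.

    # A minimal sketch of a cost-based read-request router over a replica
    # cluster, in the spirit of the C-DBRD idea summarized above. The cost
    # table and routing rule are placeholders, not the authors' load model.
    from dataclasses import dataclass, field

    @dataclass
    class Replica:
        name: str
        pending_cost: float = 0.0          # estimated cost of in-flight queries
        history: list[str] = field(default_factory=list)

    # Placeholder per-query-type cost estimates standing in for a load model.
    QUERY_COST = {"point_lookup": 1.0, "range_scan": 5.0, "join_report": 20.0}

    def route_read(cluster: list[Replica], query_type: str) -> Replica:
        # Send the read to the replica with the lowest estimated pending load.
        target = min(cluster, key=lambda r: r.pending_cost)
        target.pending_cost += QUERY_COST[query_type]
        target.history.append(query_type)
        return target

    def complete(replica: Replica, query_type: str) -> None:
        # Release the query's estimated cost once it finishes.
        replica.pending_cost = max(0.0, replica.pending_cost - QUERY_COST[query_type])

    # Heavier queries are steered away from replicas that are already loaded.
    cluster = [Replica("db1"), Replica("db2"), Replica("db3")]
    for q in ["join_report", "point_lookup", "range_scan", "point_lookup"]:
        route_read(cluster, q)

Unlike round-robin distribution, a cost-aware rule of this kind lets an expensive analytical query occupy one replica while cheap lookups continue to be absorbed by the others, which is the kind of utilization improvement the abstract describes.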